You completely described a coded thought process that could easily be implemented in an AI.
Agreed... AI could do that. But would it? AI can give a wonderful emulation of intelligence. You can program it to act frightened in the face of danger or grieved at the loss of a person or animal. But it can't learn to actually feel danger or grief.
There would have to be some curiosity on its part; otherwise, why would our AI go to the trouble of raising an alarm over a flying object it couldn't identify? Even if it's programmed, the UFO would come across as UNKNOWN. The robot might catalog it, noting the location, estimated speed, direction, and time, but it can't feel anything. The UFO could land on the lawn of the White House and a Bigfoot could step out of it, and our AI robot would feel no more than a toaster! Sure, it could be designed and programmed to act surprised, but code can't feel. The experience can't elicit an emotional response.
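To make that concrete, here's a minimal sketch in Python of what such "cataloging" amounts to. Every name here is hypothetical (not any real robotics API): a record gets written, a canned surprise routine fires on an UNKNOWN classification, and nothing in the program corresponds to feeling anything.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional, Tuple

@dataclass
class Track:
    """Raw sensor output for one detected object (hypothetical format)."""
    label: Optional[str]           # None = the classifier couldn't identify it
    position: Tuple[float, float]  # (latitude, longitude)
    speed_mps: float
    heading_deg: float

@dataclass
class Sighting:
    """The catalog entry: everything the robot 'knows' is bare telemetry."""
    classification: str
    position: Tuple[float, float]
    speed_mps: float
    heading_deg: float
    observed_at: datetime

def act_surprised() -> None:
    # A scripted display routine; it runs whether or not anything is "surprised".
    print("*beep* Unidentified object detected.")

def handle_detection(track: Track) -> Sighting:
    # Catalog the object: location, estimated speed, direction, and time.
    sighting = Sighting(
        classification=track.label or "UNKNOWN",
        position=track.position,
        speed_mps=track.speed_mps,
        heading_deg=track.heading_deg,
        observed_at=datetime.now(timezone.utc),
    )
    if sighting.classification == "UNKNOWN":
        act_surprised()  # just a code path firing, not an emotional response
    return sighting

# A saucer on the White House lawn is just another row in the log:
print(handle_detection(Track(None, (38.8977, -77.0365), 12.0, 270.0)))
```

The "surprise" is an if-statement. That's the whole point of the example: the branch executes identically for a weather balloon and for Bigfoot.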
To understand intelligence, we must first define it. Going back to Flynn's book, he notes that this is one of the primary problems. He recounts some questions and answers with Soviet peasants, taken from A. R. Luria's interviews (conducted in the 1930s, though published in the 70s), and recognizes that despite cognitive differences that make the answers hilarious to us, they made perfect sense to the peasants. Even so, the peasants knew the difference between analytic and synthetic propositions, but did not employ the same understanding of logic. "Pure logic cannot tell us anything about facts; only experience can," Flynn writes.
Q: What do a chicken and a dog have in common?
A: They are not alike. A chicken has two legs, a dog has four. A chicken has wings but a dog doesn't. A dog has big ears and a chicken's are small.
Q: Is there one word you could use for them both?
A: No, of course not.
Q: Would the word "animal" fit?
A: Yes.
It's a great book. Hadn't read it for a while; I'm glad the topic came up.